Using Fine-Tuning and Min Lookahead Beam Search to Improve Whisper

Andrea Do / Oscar Brown* / Zhengjie Wang / Nikhil Mathew / Zixin Liu / Jawwad Ahmed / Cheng Yu
* Australian National University

Abstract

The performance of Whisper in low-resource languages is still far from perfect. In addition to a lack of training data for low-resource languages, we identify some limitations in the beam search algorithm used in Whisper. To address these issues, we fine-tune Whisper on additional data and propose an improved decoding algorithm. On Vietnamese, fine-tuning Whisper-Tiny with LoRA leads to an improvement of 38.49 in WER over the zero-shot Whisper-Tiny setting, which is a further reduction of 1.45 compared to full-parameter fine-tuning. Additionally, using the Filter-Ends and Min Lookahead decoding algorithms reduces WER by 2.26 on average over a range of languages compared to standard beam search. These results generalise to larger Whisper model sizes. We also prove a theorem showing that Min Lookahead outperforms the standard beam search algorithm used in Whisper.


Introduction

Whisper has remarkable performance in transcribing multilingual speech audio into text [1]. While its performance on English and other high-resource languages is impressive, the limited availability of training audio for low-resource languages remains a challenge. As Whisper is open-source, researchers may enhance its performance with new training datasets and methods. In this paper, we investigate unconventional fine-tuning and decoding algorithms to improve Whisper’s performance in a low-resource scenario. While fine-tuning is common in practice, a systematic comparison of different fine-tuning strategies for an encoder-decoder model like Whisper has yet to be documented. In the work of Jain et al. [2], the authors froze most of the model’s parameters and fine-tuned only the final layer.

Conversely, Rouditchenko et al. [3] fine-tuned the entire model on unseen languages. Both studies lack comprehensive explanations for their choice of fine-tuning strategies. To fill this gap, we conduct a comprehensive study of fine-tuning strategies on Whisper, including full-parameter fine-tuning and partial-parameter fine-tuning, where gradients are updated only in parts of the model (a sketch contrasting these strategies is given below). We selected Vietnamese as our target language, but we believe the results translate to other low-resource languages, since we did not utilise any language-specific features in our fine-tuning experiments. Whisper uses a beam search decoding algorithm with beam width n = 5 and log-probability (logprob) as the score function [1]. This is in contrast to the greedy algorithm, which chooses the token with the greatest logprob at each decoding step. Although beam search outperforms the greedy algorithm, we suggest it can be further improved by filtering out certain sequences and performing a lookahead when selecting beams.
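To make the fine-tuning strategies above concrete, the following is a minimal sketch (not the authors' code) of partial-parameter fine-tuning and LoRA for Whisper, assuming the Hugging Face Transformers and PEFT libraries; the frozen-layer choice, LoRA rank, and target modules are illustrative assumptions rather than values taken from the paper.

```python
# Minimal sketch of two fine-tuning strategies, assuming Hugging Face
# Transformers and PEFT. Hyperparameters are illustrative only.
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# Strategy A: partial-parameter fine-tuning -- freeze everything except the
# final decoder layer, similar in spirit to fine-tuning only the last layer.
for param in model.parameters():
    param.requires_grad = False
for param in model.model.decoder.layers[-1].parameters():
    param.requires_grad = True

# Strategy B: LoRA -- keep all base weights frozen and train only low-rank
# adapters injected into the attention projections.
lora_model = get_peft_model(
    WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny"),
    LoraConfig(r=32, lora_alpha=64, target_modules=["q_proj", "v_proj"], lora_dropout=0.05),
)
lora_model.print_trainable_parameters()  # only a small fraction of parameters is trainable
```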
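For reference, the sketch below implements greedy decoding and the standard cumulative-logprob beam search that serves as the baseline here; the Filter-Ends and Min Lookahead modifications are not shown. The `step_logprobs` interface and the end-of-text handling are assumptions made for illustration, not Whisper's actual API.

```python
import heapq
from typing import Callable, List, Tuple

# `step_logprobs(prefix)` is an assumed interface returning a list of
# (token, logprob) pairs for the next decoding step, given the prefix so far.
EOS = "<|endoftext|>"

def greedy_decode(step_logprobs: Callable, max_len: int = 100) -> List[str]:
    """Pick the single highest-logprob token at every step."""
    seq: List[str] = []
    for _ in range(max_len):
        token, _ = max(step_logprobs(seq), key=lambda kv: kv[1])
        seq.append(token)
        if token == EOS:
            break
    return seq

def beam_search(step_logprobs: Callable, beam_width: int = 5, max_len: int = 100) -> List[str]:
    """Standard beam search scored by cumulative logprob (Whisper's default beam width is 5)."""
    beams: List[Tuple[float, List[str]]] = [(0.0, [])]
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            if seq and seq[-1] == EOS:          # finished sequences are carried over unchanged
                candidates.append((score, seq))
                continue
            for token, logprob in step_logprobs(seq):
                candidates.append((score + logprob, seq + [token]))
        beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
        if all(seq[-1] == EOS for _, seq in beams):
            break
    return max(beams, key=lambda c: c[0])[1]
```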

Conclusion

Despite having fewer trainable parameters, fine-tuning Whisper-Medium and Whisper-Large with high-rank LoRA yields performance improvements comparable to full-parameter fine-tuning. Decoupling the input and output embeddings does not harm model performance and can occasionally surpass the results achieved through full-parameter fine-tuning. Furthermore, we suggest Filter-Ends and Min Lookahead as improvements to Whisper’s decoding algorithm. We prove that Min Lookahead is expected to outperform standard beam search, and empirical results verify this, with particularly strong performance on low-resource languages. Future studies should perform fine-tuning experiments on more low-resource languages and investigate increasing beam diversity as a potential improvement to the decoding algorithm.
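As an illustration of what decoupling the input and output embeddings involves, the snippet below shows one plausible way to untie Whisper's output projection from its decoder token embeddings in Hugging Face Transformers; this is an assumed sketch, not the authors' implementation.

```python
import torch
from transformers import WhisperForConditionalGeneration

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

# By default the output projection (proj_out) shares its weight matrix with the
# decoder's input token embeddings. Cloning the weight breaks this tie, so the
# two embeddings can be updated independently during fine-tuning.
model.proj_out.weight = torch.nn.Parameter(model.proj_out.weight.detach().clone())
model.config.tie_word_embeddings = False

# The two weight matrices now live in separate storage.
assert model.proj_out.weight.data_ptr() != model.model.decoder.embed_tokens.weight.data_ptr()
```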
